
    Secure and linear cryptosystems using error-correcting codes

    A public-key cryptosystem, digital signature, and authentication procedures based on a Gallager-type parity-check error-correcting code are presented. The complexity of the encryption and decryption processes scales linearly with the size of the plaintext Alice sends to Bob. The public key is pre-corrupted by Bob, whereas private noise added by Alice to a given fraction of the ciphertext of each encrypted plaintext serves to increase the security of the channel and is the cornerstone for digital signatures and authentication. Various scenarios are discussed, including the possible actions of the opponent Oscar as an eavesdropper or as a disruptor.
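    The abstract does not spell out the construction, so the following is only a minimal sketch of the linear-complexity encode/noise/decode flow. A repetition code stands in for the Gallager-type code, and k, rep, and noise_frac are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in: a rate-1/5 repetition code replaces the paper's
# sparse Gallager-type code so that the decode step stays a few lines long.
k, rep = 16, 5
n = k * rep
noise_frac = 0.05      # fraction of ciphertext bits Alice flips (her private noise)

def encrypt(message_bits):
    # Linear-time encoding; for a sparse parity-check code this is also O(n).
    codeword = np.repeat(message_bits, rep)
    flips = rng.random(n) < noise_frac     # Alice's private noise vector
    return (codeword + flips) % 2

def decrypt(ciphertext):
    # Bob corrects the noise; a real scheme would run belief propagation
    # on the sparse parity-check matrix, again in O(n).
    votes = ciphertext.reshape(k, rep).sum(axis=1)
    return (votes > rep // 2).astype(int)

msg = rng.integers(0, 2, k)
recovered = decrypt(encrypt(msg))
print("bit errors after decoding:", int(np.sum(recovered != msg)))
```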

    Secure exchange of information by synchronization of neural networks

    A connection between the theory of neural networks and cryptography is presented. A new phenomenon, the synchronization of neural networks, leads to a new method for the exchange of secret messages. Numerical simulations show that two artificial networks trained by the Hebbian learning rule on their mutual outputs develop an antiparallel state of their synaptic weights. The synchronized weights are used to construct an ephemeral key-exchange protocol for the secure transmission of secret data. It is shown that an opponent who knows the protocol and all details of any transmission of the data has no chance to decrypt the secret message, since tracking the weights is a hard problem compared to synchronization. The complexity of the generation of the secure channel is linear in the size of the network. Comment: 11 pages, 5 figures
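    As a rough illustration of key exchange by synchronization, here is a minimal sketch using the common tree-parity-machine formulation, which synchronizes to identical (parallel) weights; the paper's Hebbian setup reaches an antiparallel state instead, but the mechanics are analogous. All sizes (K, N, L) are assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 20, 3     # hidden units, inputs per unit, weight bound |w| <= L

def tpm_output(w, x):
    # Hidden-unit signs and the parity (product) output bit.
    sigma = np.sign((w * x).sum(axis=1))
    sigma[sigma == 0] = 1
    return sigma, int(sigma.prod())

def hebbian_update(w, x, sigma, tau):
    # Hebbian step on the hidden units that agree with the network output.
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))

steps = 0
while not np.array_equal(wA, wB):
    steps += 1
    x = rng.choice([-1, 1], (K, N))        # public random input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                        # update only when outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)

print(f"identical weights after {steps} exchanged output bits -> shared key")
```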

    Genetic attack on neural cryptography

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Increasing the synaptic depth of the networks increases the synchronization time only polynomially, but the success probability of the geometric attack is reduced exponentially, and the attack clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that the scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random-walk rule if the synaptic depth is small compared to the square root of the system size. Comment: 8 pages, 12 figures; section 5 amended, typos corrected
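    A minimal sketch of the branching-and-selection mechanics of such a genetic attack, assuming the standard tree-parity-machine setting. The population cap M_MAX and all sizes are illustrative, and a full attack would also simulate the partners' exchange; this only shows one branching step.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
K, N, L = 3, 10, 2
M_MAX = 64               # population cap: only the fittest networks survive

def output(w, x):
    s = np.sign((w * x).sum(axis=1))
    s[s == 0] = 1
    return s, int(s.prod())

def hebb(w, x, s, tau):
    w = w.copy()
    for k in range(K):
        if s[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)
    return w

# All internal representations (sigma_1..sigma_K) consistent with an output tau.
reps = {t: [np.array(p) for p in product([-1, 1], repeat=K) if np.prod(p) == t]
        for t in (-1, 1)}

population, fitness = [rng.integers(-L, L + 1, (K, N))], [0]

def genetic_step(x, tau_ab):
    # Branch every attacker network over the 2**(K-1) representations that
    # reproduce the observed output, update each branch, keep the fittest.
    global population, fitness
    new_pop, new_fit = [], []
    for w, f in zip(population, fitness):
        tau_e = output(w, x)[1]
        for s in reps[tau_ab]:
            new_pop.append(hebb(w, x, s, tau_ab))
            new_fit.append(f + int(tau_e == tau_ab))
    keep = np.argsort(new_fit)[::-1][:M_MAX]
    population = [new_pop[i] for i in keep]
    fitness = [new_fit[i] for i in keep]

x = rng.choice([-1, 1], (K, N))
genetic_step(x, tau_ab=1)
print(len(population), "attacker networks after one branching step")  # -> 4
```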

    Statistical mechanical aspects of joint source-channel coding

    An MN-Gallager code over Galois fields of order $q$, based on the dynamical block posterior (DBP) probabilities for messages with a given set of autocorrelations, is presented with the following main results: (a) for a binary symmetric channel the threshold, $f_c$, is extrapolated for infinite messages using the scaling relation for the median convergence time, $t_{med} \propto 1/(f_c - f)$; (b) a degradation in the threshold is observed as the correlations are enhanced; (c) for a given set of autocorrelations the performance is enhanced as $q$ is increased; (d) the efficiency of the DBP joint source-channel coding is slightly better than the standard gzip compression method; (e) for a given entropy, the performance of the DBP algorithm is a function of the decay of the correlation function over large distances. Comment: 6 pages
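    Result (a) amounts to a linear extrapolation: if $t_{med} \propto 1/(f_c - f)$, then $1/t_{med}$ is linear in $f$ and vanishes at $f = f_c$. A sketch with made-up (f, t_med) pairs, not the paper's data:

```python
import numpy as np

# Synthetic pairs consistent with t_med ∝ 1/(f_c - f); not the paper's data.
f = np.array([0.06, 0.07, 0.08, 0.09])
t_med = np.array([25.0, 33.3, 50.0, 100.0])

# 1/t_med is linear in f; its zero crossing estimates the threshold f_c.
slope, intercept = np.polyfit(f, 1.0 / t_med, 1)
f_c = -intercept / slope
print(f"extrapolated threshold f_c ≈ {f_c:.3f}")   # -> about 0.100
```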

    Cryptography based on neural networks - analytical results

    The mutual learning process between two parity feed-forward networks with discrete and continuous weights is studied analytically, and we find that the number of steps required to achieve full synchronization between the two networks is finite in the case of discrete weights. The synchronization process is shown to be non-self-averaging, and the analytical solution is based on random auxiliary variables. The learning time of an attacker that is trying to imitate one of the networks is examined analytically and is found to be much longer than the synchronization time. The analytical results are found to be in agreement with simulations.
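    The gap between the synchronization time and an attacker's learning time can also be seen in a toy simulation. The sketch below uses small tree parity machines and a naive attacker that has to guess its own hidden bits; all parameters are assumptions, and the step cap merely bounds the runtime.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 16, 2          # small tree parity machines (illustrative sizes)

def forward(w, x):
    s = np.sign((w * x).sum(axis=1))
    s[s == 0] = 1
    return s, int(s.prod())

def hebb(w, x, s, tau):
    # Hebbian step on the hidden units that agree with the output.
    for k in range(K):
        if s[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))
wE = rng.integers(-L, L + 1, (K, N))   # naive unidirectional attacker

t, t_sync, t_learn = 0, None, None
while t_learn is None and t < 200_000:
    t += 1
    x = rng.choice([-1, 1], (K, N))
    sA, tA = forward(wA, x)
    sB, tB = forward(wB, x)
    sE, tE = forward(wE, x)
    if tA == tB:                        # partners only move on agreement
        hebb(wA, x, sA, tA)
        hebb(wB, x, sB, tB)
        hebb(wE, x, sE, tA)             # attacker must guess its hidden bits
    if t_sync is None and np.array_equal(wA, wB):
        t_sync = t
    if np.array_equal(wE, wA):
        t_learn = t

print(f"partners synchronized at step {t_sync}")
print(f"naive attacker succeeded at step {t_learn if t_learn else 'beyond the cap'}")
```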

    Synchronization with mismatched synaptic delays: A unique role of elastic neuronal latency

    We show that the unavoidable increase in neuronal response latency to ongoing stimulation serves as a nonuniform gradual stretching of neuronal circuit delay loops and emerges as an essential mechanism in the formation of various types of neuronal timers. Synchronization emerges as a transient phenomenon without predefined, precisely matched synaptic delays. These findings are described in an experimental procedure where conditioned stimulations were enforced on a circuit of neurons embedded within a large-scale network of cortical cells in vitro, and are corroborated by neuronal simulations. They evidence a new cortical timescale based on the stretching of neuronal circuit delay loops by tens of microseconds per spike; with realistic delays of a few milliseconds, synchronization emerges for a finite fraction of neuronal circuit delays. Comment: 12 pages, 4 figures, 13 pages of supplementary material
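    The quoted timescales combine simply: stretching of tens of microseconds per spike accumulates to milliseconds, the scale of realistic loop delays, within a few hundred spikes. A back-of-the-envelope check with assumed numbers, not the paper's data:

```python
# Toy arithmetic only: assumed per-spike stretching and initial loop delay.
stretch_per_spike_us = 30     # "tens of microseconds" of stretching per spike
base_delay_ms = 4.0           # "a few milliseconds" of initial loop delay
spikes = 100
delay_ms = base_delay_ms + spikes * stretch_per_spike_us / 1000.0
print(f"after {spikes} spikes the loop delay grows from {base_delay_ms} ms to {delay_ms} ms")
```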

    Finite size effects and error-free communication in Gaussian channels

    The efficacy of a specially constructed Gallager-type error-correcting code for communication over a Gaussian channel is examined. The construction is based on the introduction of complex matrices, used in both encoding and decoding, which comprise sub-matrices of cascading connection values. Finite-size effects are estimated in order to compare the results with the bounds set by Shannon. The critical noise level achieved for certain code rates and infinitely large systems nearly saturates the bounds set by Shannon even when the connectivity used is low.
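    The reference point for "nearly saturating the bounds" is the additive white Gaussian noise capacity, C = (1/2) log2(1 + SNR) bits per channel use; a rate-R code can be reliable only if R < C at the operating noise level. The SNR below is a placeholder, not a value from the paper:

```python
import numpy as np

snr = 1.0                                   # placeholder signal-to-noise ratio
capacity = 0.5 * np.log2(1.0 + snr)         # AWGN capacity in bits per channel use
print(f"AWGN capacity at SNR = {snr}: {capacity:.3f} bits/use")
# A rate-R code operating at this noise level can be reliable only if R < capacity.
```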

    Coloring random graphs

    We study the graph coloring problem over random graphs of finite average connectivity $c$. Given a number $q$ of available colors, we find that graphs with low connectivity almost always admit a proper coloring, whereas graphs with high connectivity are uncolorable. Depending on $q$, we find the precise value of the critical average connectivity $c_q$. Moreover, we show that below $c_q$ there exists a clustering phase $c \in [c_d, c_q]$ in which ground states spontaneously divide into an exponential number of clusters and where the proliferation of metastable states is responsible for the onset of complexity in local search algorithms. Comment: 4 pages, 1 figure, version to app. in PR
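    For intuition, the colorability of such graphs can be probed with a greedy heuristic, though greedy failure does not prove uncolorability, and locating the true threshold $c_q$ requires the far more careful analysis the paper performs. Sizes and connectivity below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n, c):
    # Erdos-Renyi graph with average connectivity c (edge probability c/n).
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < c / n:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def greedy_q_coloring(adj, q):
    # Crude probe: try to q-color greedily in random vertex order. Success
    # proves colorability; failure proves nothing.
    color = [-1] * len(adj)
    for v in rng.permutation(len(adj)):
        used = {color[u] for u in adj[v]}
        free = [col for col in range(q) if col not in used]
        if not free:
            return None
        color[v] = free[0]
    return color

adj = random_graph(500, c=3.0)
print("3-coloring found" if greedy_q_coloring(adj, 3) else "greedy failed")
```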

    Dynamics of neural cryptography

    Synchronization of neural networks has been used for novel public-channel protocols in cryptography. In the case of tree parity machines, the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between the participating neural networks. For that purpose, transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations and numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible. Comment: 9 pages, 15 figures; typos corrected
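    The key-counting argument can be made concrete with a crude upper bound: each of the K*N weights takes one of 2L+1 values, so the weight entropy, and hence the number of distinct keys, is at most (2L+1)^(K*N), exponential in the system size. The parameters below are illustrative:

```python
import numpy as np

K, L = 3, 3                # illustrative tree-parity-machine parameters
for N in (10, 100, 1000):
    bits = K * N * np.log2(2 * L + 1)     # entropy upper bound in bits
    print(f"N = {N:4d}: at most ~2^{bits:.0f} distinct keys")
```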

    Mean Field Behavior of Cluster Dynamics

    The dynamic behavior of cluster algorithms is analyzed in the classical mean-field limit. Rigorous analytical results below $T_c$ establish that the dynamic exponent has the value $z_{sw} = 1$ for the Swendsen-Wang algorithm and $z_{uw} = 0$ for the Wolff algorithm. An efficient Monte Carlo implementation is introduced, adapted for using these algorithms on fully connected graphs. Extensive simulations both above and below $T_c$ demonstrate scaling and evaluate the finite-size scaling function by means of a rather impressive collapse of the data. Comment: RevTeX, 9 pages with 7 figures
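    A naive (not the paper's efficient) Wolff implementation for the fully connected Ising ferromagnet shows the setting: couplings J/N, bond probability 1 - exp(-2*beta/N) between aligned spins, and one cluster flip per step. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta = 200, 1.2              # spins; inverse temperature (T_c is at beta = 1)
spins = rng.choice([-1, 1], N)
p_add = 1.0 - np.exp(-2.0 * beta / N)   # bond probability for aligned pairs

def wolff_step(spins):
    # One Wolff cluster flip, naive O(N * cluster size) version.
    seed = rng.integers(N)
    in_cluster = np.zeros(N, dtype=bool)
    in_cluster[seed] = True
    frontier = [seed]
    while frontier:
        i = frontier.pop()
        candidates = (spins == spins[i]) & ~in_cluster
        grabbed = candidates & (rng.random(N) < p_add)
        for j in np.flatnonzero(grabbed):
            in_cluster[j] = True
            frontier.append(j)
    spins[in_cluster] *= -1      # flip the whole cluster

for _ in range(100):
    wolff_step(spins)
print("magnetization per spin:", spins.mean())
```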